Foundations of statistics
Statistics models the collection, organization, analysis, interpretation, and presentation of data, and is used to solve mathematical problems. Conclusions drawn from statistical analysis typically involve uncertainty, as they represent the probability of an event occurring. Statistics is fundamental to disciplines of science that involve predicting or classifying events based on a large set of data and is an integral part of fields such as machine learning, bioinformatics, genomics, and economics.
Statistics also encompasses the identification and study of statistical laws, which are statistical behaviors observed over a variety of datasets.[1] One common example is the Pareto Principle, which states that roughly 80% of effects are the result of 20% of causes, and is sometimes abbreviated as the 80/20 rule.[2]
Statistical inference addresses various issues, including Bayesian inference versus frequentist inference; the distinction between Fisher's "significance testing" and the Neyman-Pearson "hypothesis testing"; and whether the likelihood principle should be followed. Some of these issues have been subject to unresolved debate for up to two centuries.[3]
Bandyopadhyay & Forster[4] describe four statistical paradigms: classical statistics (or error statistics), Bayesian statistics, likelihood-based statistics, and the use of the Akaike Information Criterion as a statistical basis. More recently, Judea Pearl reintroduced a formal mathematics for attributing causality in statistical systems that addresses fundamental limitations of both Bayesian and Neyman-Pearson methods.
Fisher's "significance testing" vs. Neyman–Pearson "hypothesis testing"
During the second quarter of the 20th century, the development of classical statistics led to the emergence of two competing models for inductive statistical testing.[5][6] The merits of these models were extensively debated[7] for over 25 years until Fisher's passing. Although a hybrid approach combining elements of both methods is commonly taught and utilized, the philosophical questions raised during the debate remain unresolved.
Significance testing
Fisher played a significant role in popularizing significance testing through his publications, such as "Statistical Methods for Research Workers" in 1925 and "The Design of Experiments" in 1935.[8] His aim was to achieve scientific experimental outcomes without bias from prior opinions. Significance testing is a probabilistic form of deductive inference, akin to modus tollens. A simplified statement of the test can be described as follows: "If the evidence contradicts the hypothesis to a sufficient degree, the hypothesis is rejected." In practice, a statistic is computed based on the experimental data, and the probability of obtaining a value greater than that statistic under a default or "null" model is compared to a predetermined threshold. This threshold represents the level of discord required (typically established by convention). One common application of this method is to determine whether a treatment has a noticeable effect based on a comparative experiment. In this case, the null hypothesis corresponds to the absence of a treatment effect, implying that the treated group and the control group are drawn from the same population. Statistical significance measures probability and does not address practical significance. It can be viewed as a criterion for the statistical signal-to-noise ratio. It is important to note that the test cannot prove the hypothesis (of no treatment effect), but it can provide evidence against it. The method relies on formulating an imaginary infinite population, representing the null hypothesis, within a specified statistical model.
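As an illustrative sketch (not taken from the original text), the following Python code carries out such a comparative experiment on hypothetical data using a permutation version of the test: the statistic is the difference in group means, the p-value is the probability of a value at least as large under the null model of no treatment effect, and 0.05 is the conventional threshold.

```python
# Minimal sketch of a Fisher-style significance test on hypothetical data:
# a permutation test comparing a treated group with a control group.
# The null model is "no treatment effect": both groups come from the same population.
import numpy as np

rng = np.random.default_rng(0)
treated = np.array([5.1, 4.8, 6.2, 5.9, 5.4])   # hypothetical measurements
control = np.array([4.6, 4.9, 4.4, 5.0, 4.7])

observed = treated.mean() - control.mean()       # test statistic
pooled = np.concatenate([treated, control])

# Distribution of the statistic under the null model: relabel the groups at random.
perms = 10_000
count = 0
for _ in range(perms):
    rng.shuffle(pooled)
    diff = pooled[:treated.size].mean() - pooled[treated.size:].mean()
    count += diff >= observed

p_value = count / perms                          # probability of a value at least this extreme
alpha = 0.05                                     # conventional threshold
print(f"p = {p_value:.4f}; reject the null hypothesis: {p_value < alpha}")
```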
The Fisherian significance test involves a single hypothesis, but the choice of the test statistic requires an understanding of relevant directions of deviation from the hypothesized model.
Hypothesis testing
Neyman and Pearson collaborated on the problem of selecting the most appropriate hypothesis based solely on experimental evidence, which differed from significance testing. Their most renowned joint paper, published in 1933,[9] introduced the Neyman-Pearson lemma, which states that a ratio of probabilities serves as an effective criterion for hypothesis selection (with the choice of the threshold being arbitrary). The paper demonstrated the optimality of the Student's t-test, one of the significance tests. Neyman believed that hypothesis testing represented a generalization and improvement of significance testing. The rationale for their methods can be found in their collaborative papers.[10]
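As a brief restatement added here for orientation (not part of the original text): for two simple hypotheses, the lemma says the most powerful test of a given size rejects the null hypothesis when the likelihood ratio exceeds a threshold fixed by that size.

```latex
\Lambda(x) \;=\; \frac{L(x \mid H_1)}{L(x \mid H_0)} \;\ge\; k,
\qquad \text{with } k \text{ chosen so that } \Pr\!\left(\Lambda(X) \ge k \mid H_0\right) = \alpha .
```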
Hypothesis testing involves considering multiple hypotheses and selecting one among them, akin to making a multiple-choice decision. The absence of evidence is not an immediate factor to be taken into account. The method is grounded in the assumption of repeated sampling from the same population (the classical frequentist assumption), although Fisher criticized this assumption (Rubin, 2020).[11]
Grounds of disagreement
The duration of the dispute allowed for a comprehensive discussion of various fundamental issues in the field of statistics.
An example exchange from 1955–1956
Fisher's attack[12]
Repeated sampling of the same population
- Such sampling is the basis of frequentist probability
- Fisher preferred fiducial inference
Type II errors
- Which result from an alternative hypothesis
Inductive behavior
- (Vs inductive reasoning, Fisher's preference)
Neyman's rebuttal[13]
Fisher's theory of fiducial inference is flawed
- Paradoxes are common
A purely probabilistic theory of tests requires an alternative hypothesis
Fisher's attacks on type II errors have faded with time
Discussion
Fisher's attack based on frequentist probability failed but was not without result. He identified a specific case (the 2×2 table) where the two schools of testing reach different results. This case is one of several that are still troubling. Commentators believe that the "right" answer is context-dependent.[14] Fiducial probability has not fared well, being virtually without advocates, while frequentist probability remains a mainstream interpretation.
Fisher's attacks on type II errors have faded with time. In the intervening years, statistics has separated the exploratory from the confirmatory. In the current environment, the concept of type II errors is used in power calculations for sample-size determination in confirmatory hypothesis tests.
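As a rough illustration of this modern use of type II errors (a sketch based on a normal approximation, not drawn from the sources cited here), the following Python snippet computes the per-group sample size needed for a two-sample comparison at significance level alpha and power 1 − beta:

```python
# Sketch: sample-size planning controls both error types.
# Normal approximation for a two-sided, two-sample comparison with
# standardized effect size d:  n per group ≈ 2 (z_{1-α/2} + z_{1-β})² / d²
import math
from scipy.stats import norm

def sample_size_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z_alpha = norm.ppf(1 - alpha / 2)   # guards against type I errors
    z_beta = norm.ppf(power)            # guards against type II errors (beta = 1 - power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

print(sample_size_per_group(d=0.5))     # about 63 per group for a "medium" effect
```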
Fisher's attack on inductive behavior has been largely successful because he selected the field of battle. While operational decisions are routinely made on a variety of criteria (such as cost), scientific conclusions from experimentation are typically made based on probability alone.
During this exchange, Fisher also discussed the requirements for inductive inference, specifically criticizing cost functions that penalize erroneous judgments. Neyman countered by mentioning the use of such functions by Gauss and Laplace. These arguments occurred 15 years after textbooks began teaching a hybrid theory of statistical testing.
Fisher and Neyman held different perspectives on the foundations of statistics (though they both opposed the Bayesian viewpoint):[14]
- The interpretation of probability
- The disagreement between Fisher's inductive reasoning and Neyman's inductive behavior reflected the Bayesian-Frequentist divide. Fisher was willing to revise his opinion (reaching a provisional conclusion) based on calculated probability, while Neyman was more inclined to adjust his observable behavior (making a decision) based on computed costs.
- The appropriate formulation of scientific questions, with a particular focus on modeling[7][15]
- Whether it is justifiable to reject a hypothesis based on a low probability without knowing the probability of an alternative
- Whether a hypothesis could ever be accepted based solely on data
- In mathematics, deduction proves, while counter-examples disprove.
- In the Popperian philosophy of science, progress is made when theories are disproven.
- Subjectivity: Although Fisher and Neyman endeavored to minimize subjectivity, they both acknowledged the significance of "good judgment." Each accused the other of subjectivity.
- Fisher subjectively selected the null hypothesis.
- Neyman-Pearson subjectively determined the criterion for selection (which was not limited to probability).
- Both subjectively established numeric thresholds.
Fisher and Neyman diverged in their attitudes and, perhaps, their language. Fisher was a scientist and an intuitive mathematician, and inductive reasoning came naturally to him. Neyman, on the other hand, was a rigorous mathematician who relied on deductive reasoning rather than probability calculations based on experiments.[5] Hence, there was an inherent clash between applied and theoretical approaches (between science and mathematics).
Related history
In 1938, Neyman relocated to the West Coast of the United States of America, effectively ending his collaboration with Pearson and their work on hypothesis testing.[5] Subsequent developments in the field were carried out by other researchers.
By 1940, textbooks began presenting a hybrid approach that combined elements of significance testing and hypothesis testing.[16] However, none of the main contributors were directly involved in the further development of the hybrid approach currently taught in introductory statistics.[6]
Statistics subsequently branched out into various directions, including decision theory, Bayesian statistics, exploratory data analysis, robust statistics, and non-parametric statistics. Neyman–Pearson hypothesis testing made significant contributions to decision theory, which is widely employed, particularly in statistical quality control. Hypothesis testing also extended its applicability to incorporate prior probabilities, giving it a Bayesian character. While Neyman–Pearson hypothesis testing has evolved into an abstract mathematical subject taught at the post-graduate level,[17] much of what is taught and used in undergraduate education under the umbrella of hypothesis testing can be attributed to Fisher.
Contemporary opinion
There have been no major conflicts between the two classical schools of testing in recent decades, although occasional criticism and disputes persist. However, it is highly unlikely that one theory of statistical testing will completely supplant the other in the foreseeable future.
The hybrid approach, which combines elements from both competing schools of testing, can be interpreted in different ways. Some view it as an amalgamation of two mathematically complementary ideas,[14] while others see it as a flawed union of philosophically incompatible concepts.[18] Fisher's approach had certain philosophical advantages, while Neyman and Pearson emphasized rigorous mathematics. Hypothesis testing remains a subject of controversy for some users, but the most widely accepted alternative method, confidence intervals, is based on the same mathematical principles.
Due to the historical development of testing, there is no single authoritative source that fully encompasses the hybrid theory as it is commonly practiced in statistics. Additionally, the terminology used in this context may lack consistency. Empirical evidence indicates that individuals, including students and instructors in introductory statistics courses, often have a limited understanding of the meaning of hypothesis testing.[19]
Summary
- The interpretation of probability remains unresolved, although fiducial probability is not widely embraced.
- Neither of the test methods has been completely abandoned, as they are extensively utilized for different objectives.
- Textbooks have integrated both test methods into the framework of hypothesis testing.
- Some mathematicians argue, with a few exceptions, that significance tests can be considered a specific instance of hypothesis tests.
- On the other hand, there are those who perceive these problems and methods as separate or incompatible.
- The ongoing dispute has had a negative impact on statistical education.
Bayesian inference versus frequentist inference
Two distinct interpretations of probability have existed for a long time, one based on objective evidence and the other on subjective degrees of belief. The debate between Gauss and Laplace could have taken place more than 200 years ago, giving rise to two competing schools of statistics. Classical inferential statistics emerged primarily during the second quarter of the 20th century,[6] largely in response to the controversial principle of indifference used in Bayesian probability at that time. The resurgence of Bayesian inference was a reaction to the limitations of frequentist probability, leading to further developments and reactions.
While the philosophical interpretations have a long history, the specific statistical terminology is relatively recent. The terms "Bayesian" and "frequentist" became standardized in the second half of the 20th century.[20] However, the terminology can be confusing, as the "classical" interpretation of probability aligns with Bayesian principles, while "classical" statistics follow the frequentist approach. Moreover, even within the term "frequentist," there are variations in interpretation, differing between philosophy and physics.
The intricate details of philosophical probability interpretations are explored elsewhere. In the field of statistics, these alternative interpretations allow for the analysis of different datasets using distinct methods based on various models, aiming to achieve slightly different objectives. When comparing the competing schools of thought in statistics, pragmatic criteria beyond philosophical considerations are taken into account.
Major contributors
Fisher and Neyman were significant figures in the development of frequentist (classical) methods.[5] While Fisher had a unique interpretation of probability that differed from Bayesian principles, Neyman adhered strictly to the frequentist approach. In the realm of Bayesian statistical philosophy, mathematics, and methods, de Finetti,[21] Jeffreys,[22] and Savage[23] emerged as notable contributors during the 20th century. Savage played a crucial role in popularizing de Finetti's ideas in English-speaking regions and establishing rigorous Bayesian mathematics. In 1965, Dennis Lindley's two-volume work titled "Introduction to Probability and Statistics from a Bayesian Viewpoint" played a vital role in introducing Bayesian methods to a wide audience. Over the course of three generations, statistics has progressed significantly, and the views of early contributors are not necessarily considered authoritative in present times.
Contrasting approaches
Frequentist inference
The earlier description briefly highlights frequentist inference, which encompasses Fisher's "significance testing" and Neyman-Pearson's "hypothesis testing." Frequentist inference incorporates various perspectives and allows for scientific conclusions, operational decisions, and parameter estimation with or without confidence intervals.
Bayesian inference
A classical frequency distribution provides information about the probability of the observed data. By applying Bayes' theorem, a more abstract concept is introduced, which involves estimating the probability of a hypothesis (associated with a theory) given the data. This concept, formerly referred to as "inverse probability," is realized through Bayesian inference. Bayesian inference involves updating the probability estimate for a hypothesis as new evidence becomes available. It explicitly considers both the evidence and prior beliefs, enabling the incorporation of multiple sets of evidence.
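The updating rule referred to here is Bayes' theorem, which for a hypothesis H and data D reads:

```latex
P(H \mid D) \;=\; \frac{P(D \mid H)\, P(H)}{P(D)}
```

The prior P(H) encodes belief before seeing the data, the likelihood P(D | H) is supplied by the statistical model, and the posterior P(H | D) is the updated belief.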
Comparisons of characteristics
Frequentists and Bayesians employ distinct probability models. Frequentists typically view parameters as fixed but unknown, whereas Bayesians assign probability distributions to these parameters. As a result, Bayesians discuss probabilities that frequentists do not acknowledge. Bayesians consider the probability of a theory, whereas true frequentists can only assess the evidence's consistency with the theory. For instance, a frequentist does not claim a 95% probability that the true value of a parameter falls within a confidence interval; rather, they state that 95% of confidence intervals encompass the true value.
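The coverage statement in the last sentence can be illustrated with a small simulation (a sketch using hypothetical parameter values; the variance is treated as known for simplicity):

```python
# Sketch: the frequentist reading of "95% confidence".
# Over repeated samples, about 95% of the intervals cover the fixed true mean;
# no probability statement is made about any single realized interval.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
true_mean, sigma, n, reps = 10.0, 2.0, 25, 10_000
half_width = norm.ppf(0.975) * sigma / np.sqrt(n)   # known-variance interval

covered = 0
for _ in range(reps):
    sample_mean = rng.normal(true_mean, sigma, n).mean()
    covered += (sample_mean - half_width) <= true_mean <= (sample_mean + half_width)

print(f"coverage ≈ {covered / reps:.3f}")           # close to 0.95
```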
| | Bayesian | Frequentist |
|---|---|---|
| Basis | Belief (prior) | Behavior (method) |
| Resulting Characteristic | Principled Philosophy | Opportunistic Methods |
| Distributions | One distribution | Many distributions (bootstrap?) |
| Ideal Application | Dynamic (repeated sampling) | Static (one sample) |
| Target Audience | Individual (subjective) | Community (objective) |
| Modeling Characteristic | Aggressive | Defensive |
Mathematical results
Both the frequentist and Bayesian schools are subject to mathematical critique, and neither readily embraces such criticism. For instance, Stein's paradox highlights the intricacy of determining a "flat" or "uninformative" prior probability distribution in high-dimensional spaces.[3] While Bayesians perceive this as tangential to their fundamental philosophy, they find frequentism plagued with inconsistencies, paradoxes, and unfavorable mathematical behavior. Frequentists can account for most of these issues. Certain "problematic" scenarios, like estimating the weight variability of a herd of elephants based on a single measurement ("Basu's elephants"), exemplify extreme cases that defy statistical estimation. The likelihood principle has been a particularly contentious arena of debate.
Statistical results
Both the frequentist and Bayesian schools have demonstrated notable accomplishments in addressing practical challenges. Classical statistics, with its reliance on mechanical calculators and specialized printed tables, boasts a longer history of obtaining results. Bayesian methods, on the other hand, have shown remarkable efficacy in analyzing sequentially sampled information, such as radar and sonar data. Several Bayesian techniques, as well as certain recent frequentist methods like the bootstrap, necessitate the computational capabilities that have become widely accessible in the past few decades. There is an ongoing discourse regarding the integration of Bayesian and frequentist approaches,[25] although concerns have been raised regarding the interpretation of results and the potential diminishment of methodological diversity.
Philosophical results
Bayesians share a common stance against the limitations of frequentism, but they are divided into various philosophical camps (empirical, hierarchical, objective, personal, and subjective), each emphasizing different aspects. A philosopher of statistics from the frequentist perspective has observed a shift from the statistical domain to philosophical interpretations of probability over the past two generations.[27] Some perceive that the successes achieved with Bayesian applications do not sufficiently justify the associated philosophical framework.[28] Bayesian methods often develop practical models that deviate from traditional inference and have minimal reliance on philosophy.[29] Neither the frequentist nor the Bayesian philosophical interpretations of probability can be considered entirely robust. The frequentist view is criticized for being overly rigid and restrictive, while the Bayesian view can encompass both objective and subjective elements, among others.
Illustrative quotations
- "Carefully used, the frequentist approach yields broadly applicable if sometimes clumsy answers"[30]
- "To insist on unbiased [frequentist] techniques may lead to negative (but unbiased) estimates of variance; the use of p-values in multiple tests may lead to blatant contradictions; conventional 0.95 confidence regions may consist of the whole real line. No wonder that mathematicians find it often difficult to believe that conventional statistical methods are a branch of mathematics."[31]
- "Bayesianism is a neat and fully principled philosophy, while frequentism is a grab-bag of opportunistic, individually optimal, methods."[24]
- "In multiparameter problems flat priors can yield very bad answers"[30]
- "Bayes' rule says there is a simple, elegant way to combine current information with prior experience to state how much is known. It implies that sufficiently good data will bring previously disparate observers to an agreement. It makes full use of available information, and it produces decisions having the least possible error rate."[32]
- "Bayesian statistics is about making probability statements, frequentist statistics is about evaluating probability statements."[33]
- "Statisticians are often put in a setting reminiscent of Arrow’s paradox, where we are asked to provide estimates that are informative and unbiased and confidence statements that are correct conditional on the data and also on the underlying true parameter."[33] (These are conflicting requirements.)
- "Formal inferential aspects are often a relatively small part of statistical analysis"[30]
- "The two philosophies, Bayesian and frequentist, are more orthogonal than antithetical."[24]
- "A hypothesis that may be true is rejected because it has failed to predict observable results that have not occurred. This seems a remarkable procedure."[22]
Summary
- Bayesian theory has a mathematical advantage.
- Frequentist probability has existence and consistency problems.
- But finding good priors to apply Bayesian theory remains (very?) difficult.
- Both theories have impressive records of successful application.
- Neither the philosophical interpretation of probability nor its support is robust.
- There is increasing skepticism about the connection between application and philosophy.
- Some statisticians are recommending active collaboration (beyond a cease-fire).
The likelihood principle
In common usage, likelihood is often considered synonymous with probability. However, according to statistics, this is not the case. In statistics, probability refers to variable data given a fixed hypothesis, whereas likelihood refers to variable hypotheses given a fixed set of data. For instance, when making repeated measurements with a ruler under fixed conditions, each set of observations corresponds to a probability distribution, and the observations can be seen as a sample from that distribution, following the frequentist interpretation of probability. On the other hand, a set of observations can also arise from sampling various distributions based on different observational conditions. The probabilistic relationship between a fixed sample and a variable distribution stemming from a variable hypothesis is referred to as likelihood, representing the Bayesian view of probability. For instance, a set of length measurements may represent readings taken by observers with specific characteristics and conditions.
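In symbols (added here for clarity, using the usual convention): if f(x | θ) is the probability model, the likelihood is the same expression read the other way around,

```latex
L(\theta \mid x) \;=\; f(x \mid \theta),
\quad \text{viewed as a function of the hypothesis } \theta \text{ with the data } x \text{ held fixed.}
```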
Likelihood is a concept that was introduced and developed by Fisher over a span of more than 40 years, although earlier references to the concept exist and Fisher's support for it was not wholehearted.[34] The concept was subsequently accepted and substantially revised by Jeffreys.[35] In 1962, Birnbaum "proved" the likelihood principle based on premises that were widely accepted among statisticians,[36] although his proof has been subject to dispute by statisticians and philosophers. Notably, by 1970, Birnbaum had rejected one of these premises (the conditionality principle) and had also abandoned the likelihood principle due to their incompatibility with the frequentist "confidence concept of statistical evidence."[37][38] The likelihood principle asserts that all the information in a sample is contained within the likelihood function, which is considered a valid probability distribution by Bayesians but not by frequentists.
Certain significance tests employed by frequentists are not consistent with the likelihood principle. Bayesians, on the other hand, embrace the principle as it aligns with their philosophical standpoint (perhaps in response to frequentists' discomfort). The likelihood approach is compatible with Bayesian statistical inference, where the posterior Bayes distribution for a parameter is derived by multiplying the prior distribution by the likelihood function using Bayes's Theorem.[34] Frequentists interpret the likelihood principle unfavorably, as it suggests a lack of concern for the reliability of evidence. The likelihood principle, according to Bayesian statistics, implies that information about the experimental design used to collect evidence does not factor into the statistical analysis of the data.[39] Some Bayesians, including Savage,[citation needed] acknowledge this implication as a vulnerability.
The likelihood principle's staunchest proponents argue that it provides a more solid foundation for statistics compared to the alternatives presented by Bayesian and frequentist approaches.[40] These supporters include some statisticians and philosophers of science.[41] While Bayesians recognize the importance of likelihood for calculations, they contend that the posterior probability distribution serves as the appropriate basis for inference.[42]
Modeling
Inferential statistics relies on statistical models. Classical hypothesis testing, for instance, has often relied on the assumption of data normality. To reduce reliance on this assumption, robust and nonparametric statistics have been developed. Bayesian statistics, on the other hand, interpret new observations based on prior knowledge, assuming continuity between the past and present. The experimental design assumes some knowledge of the factors to be controlled, varied, randomized, and observed. Statisticians are aware of the challenges in establishing causation, often stating that "correlation does not imply causation," which is more of a limitation in modeling than a mathematical constraint.
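As a small illustration of reducing reliance on the normality assumption (a sketch using hypothetical skewed data; scipy is assumed), the same two samples can be compared with a classical t-test and with a rank-based nonparametric alternative:

```python
# Sketch: a normality-based test vs. a nonparametric (rank-based) alternative.
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(3)
a = rng.lognormal(mean=0.0, sigma=0.8, size=30)   # skewed, non-normal data
b = rng.lognormal(mean=0.4, sigma=0.8, size=30)

t_res = ttest_ind(a, b)                 # assumes approximately normal populations
u_res = mannwhitneyu(a, b)              # rank-based; drops the normality assumption

print(f"t-test p = {t_res.pvalue:.3f}, Mann-Whitney p = {u_res.pvalue:.3f}")
```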
As statistics and data sets have become more complex,[note 1][note 2] questions have arisen regarding the validity of models and the inferences drawn from them. There is a wide range of conflicting opinions on modeling.
Models can be based on scientific theory or ad hoc data analysis, each employing different methods. Advocates exist for each approach.[44] Model complexity is a trade-off and less subjective approaches such as the Akaike information criterion and Bayesian information criterion aim to strike a balance.[45]
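As a sketch of how such criteria trade fit against complexity (hypothetical data, Gaussian errors assumed), AIC and BIC can be computed for two candidate polynomial models, with the smaller value preferred:

```python
# Sketch: AIC = 2k - 2 ln(L̂), BIC = k ln(n) - 2 ln(L̂); smaller is better.
# Both penalize the maximized log-likelihood by the parameter count k;
# BIC's penalty also grows with the sample size n.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.3, x.size)            # the truth is linear

def gaussian_aic_bic(y, y_hat, k):
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    log_lik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)  # Gaussian ML log-likelihood
    return 2 * k - 2 * log_lik, k * np.log(n) - 2 * log_lik

for degree in (1, 5):
    y_hat = np.polyval(np.polyfit(x, y, degree), x)
    aic, bic = gaussian_aic_bic(y, y_hat, k=degree + 2)     # coefficients + noise variance
    print(f"degree {degree}: AIC = {aic:.1f}, BIC = {bic:.1f}")
```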
Concerns have been raised even about simple regression models used in the social sciences, as a multitude of assumptions underlying model validity are often neither mentioned nor verified. In some cases, a favourable comparison between observations and the model is considered sufficient.[46]
Traditional observation-based models often fall short in addressing many significant problems, requiring the utilization of a broader range of models, including algorithmic ones. "If the model is a poor emulation of nature, the conclusions may be wrong."[47]
Modeling is frequently carried out inadequately, with improper methods employed, and the reporting of models is often subpar.[48]
Given the lack of a strong consensus on the philosophical review of statistical modeling, many statisticians adhere to the cautionary words of George Box: "All models are wrong, but some are useful."
Other reading
For a concise introduction to the fundamentals of statistics, refer to Stuart, A.; Ord, J.K. (1994). "Ch. 8 – Probability and statistical inference" in Kendall's Advanced Theory of Statistics, Volume I: Distribution Theory (6th ed.), published by Edward Arnold.
In his book Statistics as Principled Argument, Robert P. Abelson presents the perspective that statistics serve as a standardized method for resolving disagreements among scientists, who could otherwise engage in endless debates about the merits of their respective positions. From this standpoint, statistics can be seen as a form of rhetoric. However, the effectiveness of statistical methods depends on the consensus among all involved parties regarding the chosen approach.[49]
See also
- Philosophy of statistics
- History of statistics
- Philosophy of probability
- Philosophy of mathematics
- Philosophy of science
- Evidence
- Likelihoodist statistics
- Probability interpretations
- Founders of statistics
Footnotes
- Note 1: Some large models attempt to predict the behavior of voters in the United States of America. The population is around 300 million. Each voter may be influenced by many factors. For some of the complications of voter behavior (most easily understood by the natives) see: Gelman[43]
- Note 2: Efron (2013) mentions millions of data points and thousands of parameters from scientific studies.[24]
Citations
1. Kitcher & Salmon (2009) p.51
2. Bunkley, Nick (2008-03-03). "Joseph Juran, 103, Pioneer in Quality Control, Dies". The New York Times. ISSN 0362-4331. https://www.nytimes.com/2008/03/03/business/03juran.html.
3. Efron 1978.
4. Bandyopadhyay & Forster 2011.
5. Lehmann 2011.
6. Gigerenzer et al. 1989.
7. Louçã 2008.
8. Fisher 1956.
9. Neyman & Pearson 1933.
10. Neyman & Pearson 1967.
11. Rubin, M (2020). ""Repeated sampling from the same population?" A critique of Neyman and Pearson's responses to Fisher". European Journal for Philosophy of Science 10 (42): 1–15. doi:10.1007/s13194-020-00309-6. https://psyarxiv.com/23esz/download.
12. Fisher 1955.
13. Neyman 1956.
14. Lehmann 1993.
15. Lenhard 2006.
16. Halpin & Stam 2006.
17. Lehmann & Romano 2005.
18. Hubbard & Bayarri c. 2003.
19. Sotos et al. 2007.
20. Fienberg 2006.
21. de Finetti 1964.
22. Jeffreys 1939.
23. Savage 1972.
24. Efron 2013.
25. Little 2006.
26. Yu 2009.
27. Mayo 2013.
28. Senn 2011.
29. Gelman & Shalizi 2012.
30. Cox 2005.
31. Bernardo 2008.
32. Kass c. 2012.
33. Gelman 2008.
34. Edwards 1999.
35. Aldrich 2002.
36. Birnbaum 1962.
37. Birnbaum, A. (1970). Statistical Methods in Scientific Inference. Nature, 225, 14 March 1970, p. 1033.
38. Giere, R. (1977). Allan Birnbaum's Conception of Statistical Evidence. Synthese, 36, pp. 5–13.
39. Backe 1999.
40. Forster & Sober 2001.
41. Royall 1997.
42. Lindley 2000.
43. Gelman. "Red-blue talk UBC". Columbia U. http://www.stat.columbia.edu/~gelman/presentations/redbluetalkubc.pdf.
44. Tabachnick & Fidell 1996.
45. Forster & Sober 1994.
46. Freedman 1995.
47. Breiman 2001.
48. Chin n.d.
49. Abelson, Robert P. (1995). Statistics as Principled Argument. Lawrence Erlbaum Associates. ISBN 978-0-8058-0528-4. "... the purpose of statistics is to organize a useful argument from quantitative evidence, using a form of principled rhetoric."
References
- Aldrich, John (2002). "How likelihood and identification went Bayesian". International Statistical Review 70 (1): 79–98. doi:10.1111/j.1751-5823.2002.tb00350.x. http://eprints.soton.ac.uk/33104/1/0111.pdf.
- Backe, Andrew (1999). "The likelihood principle and the reliability of experiments". Philosophy of Science 66: S354–S361. doi:10.1086/392737.
- Bandyopadhyay, Prasanta, ed (2011). Philosophy of statistics. Handbook of the Philosophy of Science. 7. Oxford: North-Holland. ISBN 978-0444518620. The text is a collection of essays.
- Berger, James O. (2003). "Could Fisher, Jeffreys and Neyman Have Agreed on Testing?". Statistical Science 18 (1): 1–32. doi:10.1214/ss/1056397485.
- Bernardo, Jose M. (2008). "Comment on Article by Gelman". Bayesian Analysis 3 (3): 453. doi:10.1214/08-BA318REJ.
- Birnbaum, A. (1962). "On the foundations of statistical inference". J. Amer. Statist. Assoc. 57 (298): 269–326. doi:10.1080/01621459.1962.10480660.
- Breiman, Leo (2001). "Statistical Modeling: The Two Cultures". Statistical Science 16 (3): 199–231. doi:10.1214/ss/1009213726.
- Chin, Wynne W. (n.d.). "Structural Equation Modeling in IS Research - Understanding the LISREL and PLS perspective". http://disc-nt.cba.uh.edu/chin/ais/. University of Houston lecture notes?
- Cox, D. R. (2005). "Frequentist and Bayesian Statistics: a Critique". PHYSTAT05.
- de Finetti, Bruno (1964). "Foresight: its Logical laws, its Subjective Sources". in Kyburg, H. E.. Studies in Subjective Probability. H. E. Smokler. New York: Wiley. pp. 93–158. Translation of the 1937 French original with later notes added.
- Edwards, A.W.F. (1999). "Likelihood". http://www.cimat.mx/reportes/enlinea/D-99-10.html. Preliminary version of an article for the International Encyclopedia of the Social and Behavioral Sciences.
- Efron, Bradley (2013). "A 250 year argument: Belief, behavior, and the bootstrap". Bulletin of the American Mathematical Society. New Series 50 (1): 129–146. doi:10.1090/s0273-0979-2012-01374-5.
- Efron, Bradley (1978). "Controversies in the foundations of statistics". The American Mathematical Monthly 85 (4): 231–246. doi:10.2307/2321163. http://mathdl.maa.org/images/upload_library/22/Ford/BradleyEfron.pdf. Retrieved 2012-11-01.
- Fienberg, Stephen E. (2006). "When did Bayesian inference become "Bayesian"?". Bayesian Analysis 1 (1): 1–40. doi:10.1214/06-ba101.
- Fisher, R.A. (1925). Statistical Methods for Research Workers. Edinburgh: Oliver and Boyd.
- Fisher, Ronald A., Sir (1935). Design of Experiments. Edinburgh: Oliver and Boyd. https://archive.org/details/in.ernet.dli.2015.502684.
- Fisher, R. (1955). "Statistical Methods and Scientific Induction". Journal of the Royal Statistical Society, Series B 17 (1): 69–78. http://www.phil.vt.edu/dmayo/PhilStatistics/Triad/Fisher%201955.pdf.
- Fisher, Ronald A., Sir (1956). The logic of scientific inference. Edinburgh: Oliver and Boyd.
- Forster, Malcolm; Sober, Elliott (1994). "How to tell when simpler, more unified, or less ad-hoc theories will provide more accurate predictions". British Journal for the Philosophy of Science 45 (1): 1–36. doi:10.1093/bjps/45.1.1.
- Forster, Malcolm; Sober, Elliott (2001). "Why likelihood". Likelihood and Evidence: 89–99.
- Freedman, David (March 1995). "Some issues in the foundation of statistics". Foundations of Science 1 (1): 19–39. doi:10.1007/BF00208723.
- Gelman, Andrew (2008). "Rejoinder". Bayesian Analysis 3 (3): 467–478. doi:10.1214/08-BA318REJ. – A joke escalated into a serious discussion of Bayesian problems by 5 authors (Gelman, Bernardo, Kadane, Senn, Wasserman) on pages 445-478.
- Gelman, Andrew; Shalizi, Cosma Rohilla (2012). "Philosophy and the practice of Bayesian statistics". British Journal of Mathematical and Statistical Psychology 66 (1): 8–38. doi:10.1111/j.2044-8317.2011.02037.x. PMID 22364575.
- Gigerenzer, Gerd; Swijtink, Zeno; Porter, Theodore; Daston, Lorraine; Beatty, John; Kruger, Lorenz (1989). "Part 3: The Inference Experts". The Empire of Chance: How Probability Changed Science and Everyday Life. Cambridge University Press. pp. 70–122. ISBN 978-0-521-39838-1.
- Halpin, P.F.; Stam, H.J. (Winter 2006). "Inductive Inference or Inductive Behavior: Fisher and Neyman: Pearson Approaches Statistical Testing in Psychological Research (1940–1960)". The American Journal of Psychology 119 (4): 625–653. doi:10.2307/20445367. PMID 17286092.
- Hubbard, Raymond; Bayarri, M.J. (c. 2003). "P-values are not error probabilities". http://ftp.isds.duke.edu/WorkingPapers/03-26.pdf. – A working paper that explains the difference between Fisher's evidential p-value and the Neyman–Pearson type I error rate α.
- Jeffreys, H. (1939). The theory of probability. Oxford University Press.
- Kass (c. 2012). "Why is it that Bayes' rule has not only captured the attention of so many people but inspired a religious devotion and contentiousness, repeatedly, across many years?". http://www.stat.cmu.edu/~kass/papers/about-bayes-rule.pdf.
- Lehmann, E. L. (December 1993). "The Fisher, Neyman–Pearson theories of testing hypotheses: One theory or two?". Journal of the American Statistical Association 88 (424): 1242–1249. doi:10.1080/01621459.1993.10476404.
- Lehmann, E. L. (2011). Fisher, Neyman, and the creation of classical statistics. New York: Springer. ISBN 978-1441994998.
- Lehmann, E.L.; Romano, Joseph P. (2005). Testing Statistical Hypotheses (3rd ed.). New York: Springer. ISBN 978-0-387-98864-1.
- Lenhard, Johannes (2006). "Models and Statistical Inference: The Controversy between Fisher and Neyman–Pearson". Br. J. Philos. Sci. 57 (1): 69–91. doi:10.1093/bjps/axi152.
- Lindley, D.V. (2000). "The philosophy of statistics". Journal of the Royal Statistical Society, Series D 49 (3): 293–337. doi:10.1111/1467-9884.00238.
- Little, Roderick J. (2006). "Calibrated Bayes: A Bayes / frequentist roadmap". The American Statistician 60 (3): 213–223. doi:10.1198/000313006X117837.
- Louçã, Francisco (2008). "Should The Widest Cleft in Statistics-How and Why Fisher opposed Neyman and Pearson". http://www.repository.utl.pt/bitstream/10400.5/2327/1/wp022008.pdf. Working paper contains numerous quotations from the sources of the dispute.
- Mayo, Deborah G. (February 2013). "Discussion: Bayesian Methods: Applied? Yes. Philosophical Defense? In Flux". The American Statistician 67 (1): 11–15. doi:10.1080/00031305.2012.752410.
- Neyman, J.; Pearson, E. S. (January 1, 1933). "On the problem of the most efficient tests of statistical hypotheses". Phil. Trans. R. Soc. Lond. A 231 (694–706): 289–337. doi:10.1098/rsta.1933.0009. Bibcode: 1933RSPTA.231..289N.
- Neyman, J.; Pearson, E. S. (1967). Joint statistical papers of J. Neyman and E.S. Pearson. Cambridge University Press.
- Neyman, Jerzy (1956). "Note on an Article by Sir Ronald Fisher". Journal of the Royal Statistical Society, Series B 18 (2): 288–294.
- Royall, Richard (1997). Statistical Evidence: a likelihood paradigm. Chapman & Hall. ISBN 978-0412044113. https://archive.org/details/statisticalevide0000roya.
- Savage, L.J. (1972). Foundations of Statistics (second ed.).
- Senn, Stephen (2011). "You may believe you are a Bayesian but you are probably wrong". RMM 2: 48–66.
- Sotos, Ana Elisa Castro; van Hoof, Stijn; van den Noortgate, Wim; Onghena, Patrick (2007). "Students' misconceptions of statistical inference: A review of the empirical evidence from research on statistics education". Educational Research Review 2 (2): 98–113. doi:10.1016/j.edurev.2007.04.001. https://lirias.kuleuven.be/handle/123456789/136347.
- Stuart, A.; Ord, J.K. (1994). Kendall's Advanced Theory of Statistics. I: Distribution Theory. Edward Arnold.
- Tabachnick, Barbara G.; Fidell, Linda S. (1996). Using Multivariate Statistics (3rd ed.). HarperCollins College Publishers. ISBN 978-0-673-99414-1. "Principal components is an empirical approach while factor analysis and structural equation modeling tend to be theoretical approaches.(p 27)"
- Yu, Yue (2009). "Bayesian vs. Frequentist". http://imyy.net/research/BSTT566__Slides.pdf. – Lecture notes? University of Illinois at Chicago
Further reading
- Barnett, Vic (1999). Comparative Statistical Inference (3rd ed.). Wiley. ISBN 978-0-471-97643-1.
- Cox, David R. (2006). Principles of Statistical Inference. Cambridge University Press. ISBN 978-0-521-68567-2.
- Efron, Bradley (1986), "Why isn't everyone a Bayesian? (with discussion)", The American Statistician 40 (1): 1–11, doi:10.2307/2683105.
- Good, I. J. (1988), "The interface between statistics and philosophy of science", Statistical Science 3 (4): 386–397, doi:10.1214/ss/1177012754
- Kadane, J.B.; Schervish, M.J.; Seidenfeld, T. (1999). Rethinking the Foundations of Statistics. Cambridge University Press. Bibcode: 1999rfs..book.....K. – Bayesian.
- Mayo, Deborah G. (1992), "Did Pearson reject the Neyman–Pearson philosophy of statistics?", Synthese 90 (2): 233–262, doi:10.1007/BF00485352.
External links
- "Interpretations of Probability". Probability interpretation. Stanford Encyclopedia of Philosophy. Palo Alto, CA: Stanford University. 2019. http://plato.stanford.edu/entries/probability-interpret/.
- Philosophy of statistics. Stanford Encyclopedia of Philosophy. Palo Alto, CA: Stanford University. 2022. http://plato.stanford.edu/entries/statistics/.
Original source: https://en.wikipedia.org/wiki/Foundations of statistics.